65 research outputs found

    GRAB: A Dataset of Whole-Body Human Grasping of Objects

    Training computers to understand, model, and synthesize human grasping requires a rich dataset containing complex 3D object shapes, detailed contact information, hand pose and shape, and 3D body motion over time. While "grasping" is commonly thought of as a single hand stably lifting an object, we capture the motion of the entire body and adopt the generalized notion of "whole-body grasps". We therefore collect a new dataset, called GRAB (GRasping Actions with Bodies), of whole-body grasps, containing full 3D shape and pose sequences of 10 subjects interacting with 51 everyday objects of varying shape and size. Given MoCap markers, we fit the full 3D body shape and pose, including the articulated face and hands, as well as the 3D object pose. This yields detailed 3D meshes over time, from which we compute contact between the body and the object. This is a unique dataset that goes well beyond existing ones for modeling and understanding how humans grasp and manipulate objects, how their full body is involved, and how interaction varies with the task. We illustrate the practical value of GRAB with an example application: we train GrabNet, a conditional generative network, to predict 3D hand grasps for unseen 3D object shapes. The dataset and code are available for research purposes at https://grab.is.tue.mpg.de. Comment: ECCV 2020
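    The contact computation mentioned above (deriving body-object contact from the fitted meshes) can be sketched as a simple distance threshold between vertex sets. This is a minimal illustration, not GRAB's actual pipeline: the vertex clouds below are random stand-ins for the fitted meshes, and the threshold value is an assumption.

```python
import numpy as np

# Random vertex clouds standing in for the fitted body and object meshes.
rng = np.random.default_rng(1)
body_verts = rng.uniform(-1.0, 1.0, size=(1000, 3))
object_verts = rng.uniform(-0.1, 0.1, size=(200, 3))

def contact_mask(body, obj, threshold=0.005):
    """Mark body vertices whose nearest object vertex lies within `threshold` metres."""
    # Pairwise distances, shape (n_body, n_obj); fine at this scale,
    # a KD-tree would be preferable for full-resolution meshes.
    d = np.linalg.norm(body[:, None, :] - obj[None, :, :], axis=-1)
    return d.min(axis=1) < threshold

mask = contact_mask(body_verts, object_verts)  # boolean per-vertex contact labels
```

    In practice one would measure point-to-surface rather than vertex-to-vertex distance, but the thresholding idea is the same.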

    Principal components analysis based control of a multi-dof underactuated prosthetic hand

    Background: Functionality, controllability and cosmetics are the key issues to be addressed in order to accomplish a successful functional substitution of the human hand by means of a prosthesis. Not only should the prosthesis duplicate the human hand in shape, functionality, sensorization, perception and sense of body-belonging, but it should also be controlled like the natural one, in the most intuitive and undemanding way. At present, prosthetic hands are controlled through non-invasive interfaces based on electromyography (EMG). Driving a multi-degree-of-freedom (DoF) hand to achieve dexterity requires selectively modulating many different EMG signals so that each joint moves independently, which can demand significant cognitive effort from the user.
    Methods: A Principal Components Analysis (PCA) based algorithm is used to drive a 16-DoF underactuated prosthetic hand prototype (called CyberHand) with a two-dimensional control input, in order to perform the three prehensile forms most used in Activities of Daily Living (ADLs). The set of principal components was derived directly from the artificial hand by collecting its sensory data while it performed 50 different grasps, and was subsequently used for control.
    Results: Trials showed that two independent input signals can successfully control the posture of a real robotic hand and that correct grasps (in terms of involved fingers, stability and posture) can be achieved.
    Conclusions: This work demonstrates the effectiveness of a bio-inspired system that combines the advantages of an underactuated, anthropomorphic hand with a PCA-based control strategy, and it opens up promising possibilities for the development of an intuitively controllable hand prosthesis.
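    The PCA control idea above, a two-dimensional input driving a 16-DoF posture space, can be sketched in a few lines. The grasp postures below are random placeholders for the hand's recorded sensory data, purely to show the mapping; the real system fits the components to 50 recorded grasps.

```python
import numpy as np

# Placeholder data: 50 recorded grasp postures, each a 16-DoF joint vector.
rng = np.random.default_rng(0)
postures = rng.standard_normal((50, 16))  # shape: (n_grasps, n_dofs)

# PCA via SVD on the mean-centred posture matrix.
mean = postures.mean(axis=0)
centred = postures - mean
_, _, vt = np.linalg.svd(centred, full_matrices=False)
components = vt[:2]  # first two principal components, shape (2, 16)

def posture_from_input(u):
    """Map a 2-D control input u (e.g. derived from EMG) to a 16-DoF posture."""
    u = np.asarray(u, dtype=float)
    return mean + u @ components

pose = posture_from_input([0.5, -0.2])  # full hand posture from two signals
```

    A zero control input yields the mean posture; moving along each input axis sweeps the hand through the dominant coordinated joint motions found in the recorded grasps.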

    Manipulation planning under changing external forces

    This paper presents a planner that enables robots to manipulate objects under changing external forces. In particular, we focus on the scenario where a human applies a sequence of forceful operations, e.g. cutting and drilling, to an object held by a robot. The planner produces an efficient manipulation plan by choosing stable grasps on the object, by deciding intelligently when the robot should change its grasp as the external forces change, and by choosing subsequent grasps so that they minimize the number of regrasps required in the long term. Furthermore, as it switches from one grasp to another, the planner solves bimanual regrasping in the air using an alternating sequence of bimanual and unimanual grasps. We also present a conic formulation to address the force uncertainties inherent in human-applied external forces, with which the planner can robustly assess the stability of a grasp configuration without sacrificing planning efficiency. We provide a planner implementation on a dual-arm robot and present a variety of simulated and real human-robot experiments to show the performance of our planner.
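    The long-term regrasp minimization can be illustrated, in much simplified form, as a shortest-path computation over the operation sequence: each operation admits a set of stable grasps (arbitrary labels here), and switching grasps between consecutive operations costs one regrasp. The paper's planner additionally handles bimanual in-air regrasping and force uncertainty, which this toy sketch omits.

```python
# Hypothetical stable-grasp sets for four consecutive forceful operations.
stable = [{"g1", "g2"}, {"g2", "g3"}, {"g3"}, {"g1", "g3"}]

def min_regrasps(stable_sets):
    """Dynamic programming: cost[g] = fewest regrasps to reach the current
    operation while holding grasp g; switching grasps costs 1."""
    cost = {g: 0 for g in stable_sets[0]}
    for ops in stable_sets[1:]:
        cost = {g: min(cost[h] + (g != h) for h in cost) for g in ops}
    return min(cost.values())

print(min_regrasps(stable))  # → 1  (e.g. hold g2 for ops 1-2, then g3 for ops 3-4)
```

    Backtracking through the DP table would recover which grasp to hold during each operation, which is the schedule the robot would execute.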

    Cognitive vision system for control of dexterous prosthetic hands: Experimental evaluation

    Background: Dexterous prosthetic hands developed recently, such as SmartHand and i-LIMB, are highly sophisticated; they have individually controllable fingers and a thumb able to abduct/adduct. This flexibility allows many different grasping strategies, but it also requires new control algorithms that can exploit the many available degrees of freedom. The current study presents and tests a new control method for dexterous prosthetic hands.
    Methods: The central component of the proposed method is an autonomous controller comprising a vision system with rule-based reasoning mounted on a dexterous hand (CyberHand). The controller, termed the cognitive vision system (CVS), mimics biological control and generates commands for prehension. The CVS was integrated into a hierarchical control structure: 1) the user triggers the system and controls the orientation of the hand; 2) a high-level controller automatically selects the grasp type and size; and 3) an embedded hand controller implements the selected grasp using closed-loop position/force control. The operation of the control system was tested in 13 healthy subjects who used CyberHand, attached to the forearm, to grasp and transport 18 objects placed at two different distances.
    Results: The system correctly estimated grasp type and size (nine commands in total) in about 84% of the trials. In an additional 6% of the trials, the grasp type and/or size differed from the optimal ones but were still good enough for the grasp to succeed. When the control task was simplified by decreasing the number of possible commands, the classification accuracy increased (e.g., 93% for guessing the grasp type only).
    Conclusions: The original outcome of this research is a novel controller empowered by vision and reasoning and capable of high-level analysis (i.e., determining object properties) and autonomous decision making (i.e., selecting the grasp type and size). The automatic control eases the burden on the user, who can therefore concentrate on what to do rather than on how to do it. The tests showed that the performance of the controller was satisfactory and that the users were able to operate the system with minimal prior training.
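    The high-level grasp selection (step 2 of the hierarchy) can be caricatured as a small rule base mapping estimated object dimensions to a grasp type and size. The rules, thresholds, and grasp names below are invented for illustration only; the actual CVS derives its decisions from vision features.

```python
# Toy rule base: object dimensions (mm) -> (grasp type, aperture size).
# All thresholds and labels are illustrative assumptions.
def select_grasp(width_mm, height_mm):
    if width_mm < 15:
        gtype = "pinch"
    elif width_mm < 50:
        gtype = "palmar" if height_mm > 60 else "lateral"
    else:
        gtype = "power"
    size = "small" if width_mm < 40 else "large"
    return gtype, size

print(select_grasp(30, 80))  # → ('palmar', 'small')
print(select_grasp(70, 20))  # → ('power', 'large')
```

    With a handful of types and two sizes, the command set stays small (the paper reports nine commands in total), which keeps the classification problem tractable.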

    An Agent-Based Approach for Manufacturing Enterprise Integration and Supply Chain Management

    Improving supply chain management is very important for increasing competitive position and profitability. Manufacturing enterprises are now moving towards open architectures that integrate their activities with those of their suppliers, customers and partners within wide supply chain networks. This paper presents an agent-based approach to manufacturing enterprise integration and supply chain management to meet such requirements. A hybrid agent-based architecture is proposed, its main features are then described, and a prototype implementation is presented. Keywords: enterprise integration, distributed manufacturing systems, supply chain management, agent, multi-agent systems, mediator. 1 INTRODUCTION: Manufacturing enterprises are confronted with growing competition, the evolution of new markets, and increasingly complex global political and economic scenarios. The need for cheaper and more cost-effective products is evident. Improving supply chain management..
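    A minimal sketch of the mediator pattern named in the keywords: agents register their capabilities with a mediator, which routes task requests to a capable agent. The class, agent names, and message format are illustrative assumptions, not taken from the paper's architecture.

```python
# Minimal mediator: agents advertise capabilities, the mediator routes requests.
class Mediator:
    def __init__(self):
        self.agents = {}  # name -> (capability set, handler callable)

    def register(self, name, capabilities, handler):
        self.agents[name] = (set(capabilities), handler)

    def request(self, capability, payload):
        # Route to the first registered agent offering the capability.
        for name, (caps, handler) in self.agents.items():
            if capability in caps:
                return name, handler(payload)
        raise LookupError(f"no agent offers {capability!r}")

m = Mediator()
m.register("supplier_A", {"quote"}, lambda part: {"part": part, "price": 12.5})
agent, reply = m.request("quote", "gear-housing")
print(agent, reply)  # → supplier_A {'part': 'gear-housing', 'price': 12.5}
```

    In a full system the handlers would be networked agents negotiating over a message protocol; the mediator's role is simply to decouple requesters from providers.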

    Design Knowledge Collection by Modeling
